
    Toward a model for digital tool criticism: Reflection as integrative practice

    In the past decade, an increasing set of digital tools has been developed with which digital sources can be selected, analyzed, and presented. Many tools go beyond keyword search and perform different types of analysis, aggregation, mapping, and linking of data selections, which transforms materials and creates new perspectives, thereby changing the way scholars interact with and perceive their materials. These tools, together with the massive amount of digital and digitized data available for humanities research, put a strain on traditional humanities research methods. Currently, there is no established method of assessing the role of digital tools in the research trajectory of humanities scholars. There is no consensus on what questions researchers should ask themselves to evaluate digital sources beyond those of traditional analogue source criticism. This article aims to contribute to a better understanding of digital tools and the discussion of how to evaluate and incorporate them in research, based on findings from a digital tool criticism workshop held at the 2017 Digital Humanities Benelux conference. The overall goal of this article is to provide insight into the actual use and practice of digital tool criticism, offer a ready-made format for a workshop on digital tool criticism, give insight into aspects that play a role in digital tool criticism, propose an elaborate model for digital tool criticism that can serve as common ground for further conversations in the field, and, finally, provide recommendations for future workshops, researchers, data custodians, and tool builders.

    Using Prolog as the Fundament for Applications on the Semantic Web

    This article describes the experience of developing a Semantic Web application entirely in Prolog. The application, a demonstrator that provides access to multiple art collections and links these using cultural heritage vocabularies…
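
    The pattern the abstract points to is that SWI-Prolog's semweb libraries expose the RDF store through the predicate rdf/3, so application queries become ordinary Prolog goals. A minimal sketch under that assumption (the file name and the artwork_title/2 helper are illustrative, not the paper's actual code):

        % Load RDF into SWI-Prolog's main-memory store and query it
        % with plain Prolog goals.
        :- use_module(library(semweb/rdf_db)).
        :- use_module(library(semweb/turtle)).  % Turtle parser for rdf_load/1

        :- rdf_register_prefix(dc, 'http://purl.org/dc/elements/1.1/').

        load_collections :-
            rdf_load('collections.ttl').        % hypothetical data file

        % artwork_title(?Work, ?Title): Work has Dublin Core title Title.
        artwork_title(Work, Title) :-
            rdf(Work, dc:title, literal(Title)).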

    Addressing Publishing Issues with Hypermedia Distributed on the Web

    The content and structure of an electronically published document can be authored and processed in ways that allow for flexibility in presentation on different environments for different users. This enables authors to craft documents that are more widely presentable. Electronic publishing issues that arise from this separation of document storage from presentation include (1) respecting the intent and restrictions of the author and publisher in the document’s presentation, and (2) applying costs to individual document components and allowing the user to choose among alternatives to control the price of the document’s presentation. These costs apply not only to the individual media components displayed but also to the structure created by document authors to bring these media components together as multimedia. A collection of ISO standards, primarily SGML, HyTime, and DSSSL, facilitates the representation of presentation-independent documents and the creation of environments that process them for presentation. SMIL is a W3C format under development for hypermedia documents distributed on the World Wide Web. Since SMIL is SGML-compliant, it can easily be incorporated into SGML/HyTime and DSSSL environments. This paper discusses how to address these issues in the context of presentation-independent hypermedia storage. It introduces the Berlage environment, which uses SGML, HyTime, DSSSL, and SMIL to store, process, and present hypermedia data. This paper also describes how the Berlage environment can be used to enforce publisher restrictions on media content and to allow users to control the pricing of document presentations. Also explored is the ability of both SMIL and HyTime to address these issues in general, enabling SMIL and HyTime systems to consistently process documents of different document models authored in different environments.

    Do you need experts in the crowd? A case study in image annotation for marine biology

    Labeled data is a prerequisite for successfully applying machine learning techniques to a wide range of problems. Recently, crowdsourcing has been shown to provide effective solutions to many labeling tasks. However, tasks in specialist domains are difficult to map to Human Intelligence Tasks (HITs) that can be solved adequately by "the crowd". The question addressed in this paper is whether these specialist tasks can be cast in such a way that accurate results can still be obtained through crowdsourcing. We study a case where the goal is to identify fish species in images extracted from videos taken by underwater cameras, a task that typically requires profound domain knowledge in marine biology and hence would be difficult, if not impossible, for the crowd. We show that by carefully converting the recognition task to a visual similarity comparison task, the crowd achieves agreement with the experts comparable to the agreement achieved among experts. Further, non-expert users can learn and improve their performance during the labeling process, e.g., from the system's feedback.
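
    The key move described above, recasting species recognition as visual similarity comparison, still ends with an aggregation step: the crowd's judgments must be combined into a label. A minimal sketch of one such step (simple majority voting; the vote/3 schema and predicate names are assumptions for illustration, not the paper's method):

        :- use_module(library(aggregate)).

        % vote(Worker, Image, Species): Worker judged Image most similar
        % to the reference images of Species (toy data for illustration).
        vote(w1, img42, stripey_snapper).
        vote(w2, img42, stripey_snapper).
        vote(w3, img42, grouper).

        % species_votes(+Image, ?Species, -N): N distinct workers chose Species.
        species_votes(Image, Species, N) :-
            setof(W, vote(W, Image, Species), Ws),
            length(Ws, N).

        % majority_label(+Image, -Species): the species with the most votes.
        majority_label(Image, Species) :-
            aggregate_all(max(N, S), species_votes(Image, S, N), max(_, Species)).

    With the toy facts above, ?- majority_label(img42, S). yields S = stripey_snapper.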

    Gamesourcing Expert Painting Annotations

    Online collections provided by museums are increasingly opened for contributions from users outside the museum. These initiatives are mostly targeted at obtaining tags describing aspects of artworks that are common knowledge, which does not require the contributors to have specific skills or knowledge. Museums, however, are also interested in obtaining very specific information about the subject matter of their artworks. We present a game that can help collect expert knowledge by enabling non-expert users to perform an expert annotation task. This is achieved by simplifying the expert task and providing a sufficient level of annotation support to the users. A user study confirms the usefulness of our approach.

    Measuring the Effectiveness of Gamesourcing Expert Oil Painting Annotations

    Tasks that require users to have expert knowledge are difficult to crowdsource. They are usually too complex to be carried out by non-experts, and the available expert…

    SWISH DataLab: A Web Interface for Data Exploration and Analysis

    SWISH DataLab is a single integrated collaborative environment for data processing, exploration, and analysis that combines Prolog and R. The web interface makes it possible to share the data, the code of all processing steps, and the results among researchers, and a versioning system facilitates reproducibility of the research at any chosen point. Using search logs from the National Library of the Netherlands combined with the collection content metadata, we demonstrate how to use SWISH DataLab for all stages of data analysis, using Prolog predicates, graph visualizations, and R.
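
    As a hedged illustration of the Prolog side of such a log analysis (the query_log/3 schema and top_queries/2 are invented for this sketch; the National Library's actual log format and the R plotting step are not shown):

        :- use_module(library(aggregate)).

        % query_log(Session, Timestamp, Query): one search action (toy data).
        query_log(s1, 1001, rembrandt).
        query_log(s1, 1005, 'night watch').
        query_log(s2, 1042, rembrandt).

        % top_queries(+K, -Ranked): the K most frequent queries,
        % as Count-Query pairs in descending order of frequency.
        top_queries(K, Ranked) :-
            findall(N-Q,
                    aggregate(count, S^T^query_log(S, T, Q), N),
                    Counts),
            sort(1, @>=, Counts, Sorted),
            length(Ranked, K),
            append(Ranked, _, Sorted).

    With the toy facts above, ?- top_queries(1, R). yields R = [2-rembrandt].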

    Thesaurus-based search in large heterogeneous collections

    In cultural heritage, large virtual collections are coming into existence. Such collections contain heterogeneous sets of metadata and vocabulary concepts, originating from multiple sources. In the context of the E-Culture demonstrator we have shown earlier that such virtual collections can be effectively explored with keyword search and semantic clustering. In this paper we describe the design rationale of ClioPatria, an open-source system which provides APIs for scalable semantic graph search. The use of ClioPatria’s search strategies is illustrated with a realistic use case: searching for “Picasso”. We discuss details of scalable graph search, the required OWL reasoning functionalities, and show why SPARQL queries are insufficient for solving the search problem.
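
    A rough sketch of the two-step pattern such a keyword search implies: match the keyword against literals in the store, then walk the RDF graph outward from the matching resources. This uses SWI-Prolog's semweb API directly; the predicate names are assumptions, and this is not ClioPatria's actual API:

        :- use_module(library(semweb/rdf_db)).

        % keyword_resource(+Keyword, -Resource): Resource has a literal
        % property containing Keyword (case-insensitive substring match).
        keyword_resource(Keyword, Resource) :-
            rdf(Resource, _P, literal(substring(Keyword), _Text)).

        % reachable(+From, +MaxHops, -To): To lies within MaxHops edges of From.
        reachable(From, _, From).
        reachable(From, Hops, To) :-
            Hops > 0,
            rdf(From, _P, Next),
            \+ rdf_is_literal(Next),
            Hops1 is Hops - 1,
            reachable(Next, Hops1, To).

    A query such as ?- keyword_resource('Picasso', R), reachable(R, 2, Related). then enumerates resources in the two-hop neighbourhood of every literal match.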

    ClioPatria: A SWI-Prolog Infrastructure for the Semantic Web

    ClioPatria is a comprehensive semantic web development framework based on SWI-Prolog. SWI-Prolog provides an efficient C-based main-memory RDF store that is designed to cooperate naturally and efficiently with Prolog, realizing a flexible RDF-based environment for rule-based programming. ClioPatria extends this core with a SPARQL and LOD server, an extensible web frontend for managing the server, browsing the data, and querying it using SPARQL and Prolog, and a Git-based plugin manager. The ability to query RDF using Prolog provides query composition and smooth integration with application logic. ClioPatria is primarily positioned as a prototyping platform for exploring novel ways of reasoning with RDF data. It has been used in several research projects to perform tasks such as data integration, enrichment, and semantic search.
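
    A minimal sketch of the query composition the abstract highlights: rdf/3 goals join with each other and with ordinary application logic inside a single rule. The prefixes below are real vocabularies, but the predicates and the threshold are illustrative assumptions:

        :- use_module(library(semweb/rdf_db)).
        :- use_module(library(aggregate)).

        :- rdf_register_prefix(dcterms, 'http://purl.org/dc/terms/').
        :- rdf_register_prefix(foaf, 'http://xmlns.com/foaf/0.1/').

        % works_by_named(+Name, -Work): two RDF goals composed as an
        % ordinary Prolog conjunction.
        works_by_named(Name, Work) :-
            rdf(Creator, foaf:name, literal(Name)),
            rdf(Work, dcterms:creator, Creator).

        % prolific(+Creator): application logic layered over RDF data;
        % the threshold of 10 works is an arbitrary illustrative choice.
        prolific(Creator) :-
            aggregate_all(count, rdf(_, dcterms:creator, Creator), N),
            N > 10.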